# Reinforcement Learning

Search R1
Search-R1 is a reinforcement learning framework for training large language models (LLMs) that can reason and call search engines. Built upon veRL, it supports a range of reinforcement learning methods and LLM architectures, enabling efficient, scalable research and development in tool-augmented reasoning.
Model Training and Deployment
39.2K

D1
This model improves the reasoning capabilities of diffusion large language models through reinforcement learning and masked self-supervised fine-tuning on high-quality reasoning trajectories. Its significance lies in optimizing the model's reasoning process and reducing computational cost while keeping learning dynamics stable. Suited to users who want greater efficiency in writing and reasoning tasks.
Writing Assistant
42.2K

Deepcoder
DeepCoder-14B-Preview is a large language model for code reasoning trained with reinforcement learning. It handles long contexts, achieves a 60.6% pass rate, and is suited to programming tasks and automated code generation. Its advantages lie in an innovative training method that outperforms comparable models, and in being completely open source, supporting a wide range of community applications and research.
Coding Assistant
37.8K
Chinese Picks

Hunyuan T1
HunYuan T1 is a large-scale reasoning model launched by Tencent. Built on reinforcement learning, it significantly improves reasoning capability through extensive post-training. It excels at long-text processing and context capture while optimizing compute consumption, giving it efficient reasoning ability. It handles a wide range of reasoning tasks and is particularly strong in mathematics and logical reasoning. The model is based on deep learning, continuously optimized with real-world feedback, and suited to fields such as scientific research and education.
AI Model
65.4K
Chinese Picks

Hunyuan T1
HunYuan T1 is a deep-reasoning large model based on reinforcement learning, launched by Tencent. Through extensive post-training and alignment with human preferences, it significantly improves reasoning ability and efficiency. Built on a large-scale Hybrid-Transformer-Mamba MoE architecture, the model performs better on long texts. Suitable for users who need complex reasoning and logical problem-solving, supporting scientific research and technological development.
AI Model
73.7K

Hunyuan T1
HunYuan T1 is an ultra-large-scale reasoning model based on reinforcement learning. Post-training significantly improves reasoning ability and aligns with human preferences. This model focuses on long-text processing and complex reasoning tasks, exhibiting significant performance advantages.
Artificial Intelligence
44.7K

Light R1 14B DS
Light-R1-14B-DS is an open-source mathematical model developed by Qihoo 360 Technology Co., Ltd. Trained with reinforcement learning on top of DeepSeek-R1-Distill-Qwen-14B, it scored 74.0 and 60.2 on the AIME24 and AIME25 mathematics competition benchmarks respectively, surpassing many 32B-parameter models. It successfully applied reinforcement learning, under a lightweight budget, to a model already fine-tuned for long-chain reasoning, giving the open-source community a powerful mathematical model. Its open-source release supports applications of natural language processing in education, especially mathematical problem-solving, and offers researchers and developers a valuable research foundation and practical tool.
AI Model
65.7K

Light R1
Light-R1 is an open-source project developed by Qihoo360 that trains long-chain reasoning models through curriculum-style supervised fine-tuning (SFT), direct preference optimization (DPO), and reinforcement learning (RL). Using decontaminated datasets and efficient training methods, the project achieves long-chain reasoning capability from scratch. Its main advantages are open-source training data, low training cost, and excellent mathematical reasoning performance. The project responds to the training needs of current long-chain reasoning models by providing a transparent, reproducible training recipe. It is free and open source, suitable for research institutions and developers.
Model Training and Deployment
75.1K
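The DPO stage in a pipeline like this optimizes the policy directly on (chosen, rejected) answer pairs, with no separate reward model. A minimal single-pair sketch of the loss (the function name and numeric setup are illustrative, not Light-R1's actual code):

```python
import math

def dpo_loss(logp_w: float, logp_l: float,
             ref_logp_w: float, ref_logp_l: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair: push the policy's log-prob
    margin over the reference model toward the chosen answer (w)
    and away from the rejected one (l)."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))
```

With all log-probabilities equal, the loss starts at log 2 ≈ 0.693 and falls as the policy's margin for the chosen answer grows.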

R1 Omni
R1-Omni is an innovative multimodal emotion recognition model that enhances model reasoning and generalization capabilities through reinforcement learning. Developed based on HumanOmni-0.5B, it focuses on emotion recognition tasks and can perform emotion analysis using visual and audio modal information. Its main advantages include strong reasoning capabilities, significantly improved emotion recognition performance, and excellent performance on out-of-distribution data. This model is suitable for scenarios requiring multimodal understanding, such as sentiment analysis and intelligent customer service, and has significant research and application value.
Emotional companionship
80.6K

Steiner 32b Preview
Steiner is a series of reasoning models developed by Yichao 'Peak' Ji, focusing on training on synthetic data through reinforcement learning, capable of exploring multiple paths and autonomously verifying or retracing during reasoning. The model aims to replicate the reasoning capabilities of OpenAI o1 and verify the scaling curve during reasoning. Steiner-preview is an ongoing project, and its open-source nature aims to share knowledge and obtain feedback from more real users. Although the model performs well in some benchmark tests, it has not yet fully achieved the reasoning scaling capabilities of OpenAI o1 and is therefore still under development.
AI Model
67.3K

Notagen
NotaGen is an innovative symbolic music generation model that enhances music generation quality through three stages: pre-training, fine-tuning, and reinforcement learning. Utilizing large language model technology, it can generate high-quality classical music scores, bringing new possibilities to music creation. The model's main advantages include efficient generation, diverse styles, and high-quality output. It is applicable in music creation, education, and research, with broad application prospects.
Music Generation
113.2K

SWE RL
SWE-RL is a reinforcement learning-based large language model reasoning technique proposed by Facebook Research, aiming to leverage open-source software evolution data to improve model performance in software engineering tasks. This technology optimizes the model's reasoning capabilities through a rule-driven reward mechanism, enabling it to better understand and generate high-quality code. The main advantages of SWE-RL lie in its innovative reinforcement learning approach and effective utilization of open-source data, opening up new possibilities in the field of software engineering. The technology is currently in the research phase and does not yet have a defined commercial pricing, but it shows significant potential in improving development efficiency and code quality.
Coding Assistant
53.3K
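A rule-driven reward of the kind SWE-RL describes can be sketched by scoring a generated patch against the ground-truth patch from the repository's history; the sequence-similarity measure and penalty value below are assumptions for illustration, not the paper's exact implementation:

```python
import difflib

def patch_reward(predicted: str, oracle: str) -> float:
    """Rule-driven reward: no learned reward model, just a textual
    similarity score between the generated patch and the ground-truth
    patch mined from open-source software evolution data."""
    if not predicted.strip():
        return -1.0  # penalize empty/malformed output
    return difflib.SequenceMatcher(None, predicted, oracle).ratio()
```

Because the reward is a pure function of the two strings, it is cheap to compute at RL-training scale and cannot be gamed the way a learned reward model can.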

Mlgym
MLGym is an open-source framework and benchmark developed by Meta's GenAI team and the UCSB NLP team for training and evaluating AI research agents. By offering diverse AI research tasks, it fosters the development of reinforcement learning algorithms and helps researchers train and evaluate models in real-world research scenarios. The framework supports various tasks, including computer vision, natural language processing, and reinforcement learning, aiming to provide a standardized testing platform for AI research.
Model Training and Deployment
52.2K

VLM R1
VLM-R1 is a reinforcement learning-based vision-language model focused on visual understanding tasks such as Referring Expression Comprehension (REC). By combining R1-style reinforcement learning with supervised fine-tuning (SFT), the model performs strongly on both in-domain and out-of-domain data. Its main advantages are stability and generalization, letting it excel across a variety of vision-language tasks. Built upon Qwen2.5-VL, it leverages advanced techniques such as Flash Attention 2 to improve computational efficiency. VLM-R1 aims to provide an efficient, reliable solution for vision-language tasks and suits applications that require precise visual understanding.
AI Model
60.7K

Novasky
NovaSky is an AI technology platform dedicated to improving the performance of code generation and reasoning models. Through innovative test-time scaling techniques (such as S*) and reinforcement learning-based distillation, it makes non-reasoning models excel at code generation. The platform provides developers with efficient, low-cost model training and optimization, helping them achieve higher efficiency and accuracy in programming tasks. NovaSky originates from the Sky Computing Lab @ Berkeley, with strong academic backing and a foundation of cutting-edge research. It currently offers a variety of model optimization methods, including inference-cost optimization and model distillation, to meet the needs of different developers.
Development & Tools
54.4K

Alphamaze
AlphaMaze is a decoder-only language model designed for visual reasoning tasks. Through training on maze-solving, it demonstrates the potential of language models for visual reasoning. Built on a 1.5-billion-parameter Qwen model, it is trained with supervised fine-tuning (SFT) and reinforcement learning (RL). Its main advantage is converting visual tasks into text for reasoning, compensating for traditional language models' weak spatial understanding. The model was developed to improve AI performance on visual tasks, especially those requiring step-by-step reasoning. AlphaMaze is currently a research project; its commercial pricing and market positioning are not yet defined.
AI Model
46.4K

Homietele
HOMIEtele is an innovative teleoperation solution designed for humanoid robots, leveraging reinforcement learning and low-cost exoskeleton hardware to achieve precise walking and manipulation. Its significance lies in addressing the inefficiencies and instability of traditional teleoperation systems. By utilizing human motion capture and a reinforcement learning training framework, HOMIEtele enables robots to perform complex tasks more naturally. Key advantages include efficient task completion, elimination of the need for complex motion capture equipment, and rapid training times. Primarily targeting robotics research institutions, manufacturing, and logistics industries, the price isn't publicly available, but its low-cost hardware system offers high cost-effectiveness.
Robots
51.3K

Deepscaler 1.5B Preview
DeepScaleR-1.5B-Preview is a large language model optimized by reinforcement learning, dedicated to enhancing the capabilities of solving mathematical problems. It achieves significant improvements in accuracy within long-text inference scenarios, driven by distributed reinforcement learning algorithms. Key advantages include efficient training strategies, notable performance gains, and the flexibility of open-source availability. Developed by the Sky Computing Lab and Berkeley AI Research team at the University of California, Berkeley, this model aims to advance the application of artificial intelligence in education, especially in mathematics education and competitive mathematics. Available under the MIT open-source license, it is completely free for researchers and developers to use.
Education
77.6K

R1 V
R1-V is a project focused on enhancing the generalization capabilities of vision-language models (VLMs). Using reinforcement learning with verifiable rewards (RLVR), it significantly improves VLM generalization in visual counting tasks, excelling particularly in out-of-distribution (OOD) tests. The technology's significance lies in optimizing large models at extremely low cost (training for as little as $2.62), offering new insight into practical applications of vision-language models. The project improves on existing VLM training methods, aiming to boost performance on complex visual tasks through innovative training strategies. Its open-source nature also makes it a valuable resource for researchers and developers exploring and applying advanced VLM techniques.
AI Model
66.5K
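RLVR replaces a learned reward model with a programmatic check against ground truth. For a visual counting task, that check can be as simple as the sketch below (the answer format is an assumption for illustration):

```python
def verifiable_reward(model_output: str, true_count: int) -> float:
    """RLVR-style reward: 1.0 iff the model's answer parses to the
    verifiable ground-truth count, 0.0 otherwise. No reward model
    is trained, so the signal cannot drift or be reward-hacked."""
    try:
        return 1.0 if int(model_output.strip()) == true_count else 0.0
    except ValueError:
        return 0.0  # unparseable answers earn nothing
```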
Fresh Picks

Tülu 3 405B
Tülu 3 405B is an open-source language model developed by the Allen Institute for AI, featuring 405 billion parameters. Its performance is enhanced by reinforcement learning with verifiable rewards (RLVR), excelling particularly in mathematical and instruction-following tasks. It is optimized from the Llama-405B model using techniques such as supervised fine-tuning and preference optimization. Its open-source nature makes it a powerful tool for research and development, suitable for applications requiring a high-performance language model.
AI Model
105.7K

CUA
The Computer-Using Agent (CUA) is an advanced AI model developed by OpenAI, combining the visual capabilities of GPT-4o with advanced reasoning through reinforcement learning. It can interact with graphical user interfaces (GUIs) like a human, without relying on specific operating system APIs or web interfaces. The flexibility of CUA enables it to perform tasks in various digital environments, such as filling out forms and browsing the web. The emergence of this technology marks the next step in AI development, opening new possibilities for AI applications in everyday tools. CUA is currently in a research preview phase and is available for use by Pro users in the United States through Operator.
Personal Assistance
73.1K

Deepseek R1 Distill Qwen 1.5B
Developed by the DeepSeek team, the DeepSeek-R1-Distill-Qwen-1.5B is an open-source language model optimized through distillation based on the Qwen2.5 series. This model significantly enhances inference capabilities and performance through large-scale reinforcement learning and data distillation techniques while maintaining a compact model size. It excels in various benchmark tests, especially in mathematics, code generation, and reasoning tasks. The model supports commercial use and allows users to modify and develop derivative works, making it ideal for research institutions and enterprises looking to create high-performance natural language processing applications.
AI Model
217.5K

Deepseek R1 Distill Qwen 7B
DeepSeek-R1-Distill-Qwen-7B is a reinforcement learning-optimized reasoning model distilled from Qwen-7B. It excels in mathematical, coding, and reasoning tasks, generating high-quality reasoning chains and solutions. This model significantly enhances reasoning capabilities and efficiency through large-scale reinforcement learning and data distillation techniques, making it suitable for scenarios requiring complex reasoning and logical analysis.
Model Training and Deployment
143.5K

Deepseek R1 Distill Qwen 14B
DeepSeek-R1-Distill-Qwen-14B is a distilled model developed by the DeepSeek team based on Qwen-14B, focusing on inference and text generation tasks. This model significantly enhances inference capability and generation quality through large-scale reinforcement learning and data distillation techniques while reducing computational resource requirements. Its main advantages include high performance, low resource consumption, and broad applicability, making it suitable for scenarios requiring efficient inference and text generation.
AI Model
277.1K

Deepseek R1 Distill Qwen 32B
DeepSeek-R1-Distill-Qwen-32B, developed by the DeepSeek team, is a high-performance language model optimized through distillation based on the Qwen-2.5 series. The model has excelled in multiple benchmark tests, especially in mathematical, coding, and reasoning tasks. Its key advantages include efficient inference capabilities, robust multilingual support, and open-source features facilitating secondary development and application by researchers and developers. It is suited to any scenario requiring high-performance text generation, such as intelligent customer service, content creation, and code assistance, making it versatile for various applications.
Model Training and Deployment
117.0K

Deepseek R1 Distill Llama 70B
DeepSeek-R1-Distill-Llama-70B is a large language model developed by the DeepSeek team, based on the Llama-70B architecture and optimized through reinforcement learning. It excels in reasoning, dialogue, and multilingual tasks, supporting diverse applications such as code generation, mathematical reasoning, and natural language processing. Its primary advantages include efficient reasoning capabilities and problem-solving skills for complex tasks, while also supporting both open-source and commercial use. This model is suitable for enterprises and research institutions that require high-performance language generation and reasoning abilities.
AI Model
82.5K

Pasa
PaSa is an advanced academic paper search agent developed by ByteDance, based on large language model (LLM) technology. It can autonomously invoke search tools, read papers, and filter relevant references to obtain comprehensive and accurate results for complex academic queries. This technology is optimized through reinforcement learning, trained using the synthetic dataset AutoScholarQuery, and has shown outstanding performance on the real-world query dataset RealScholarQuery, significantly outperforming traditional search engines and GPT-based methods. The main advantages of PaSa lie in its high recall and precision rates, providing researchers with a more efficient academic search experience.
AI search
73.4K
Chinese Picks

Kimi K1.5
Kimi k1.5, developed by MoonshotAI, is a multimodal language model that significantly enhances performance in complex reasoning tasks through reinforcement learning and long-context extension techniques. The model has achieved industry-leading results on several benchmark tests, surpassing GPT-4o and Claude Sonnet 3.5 in mathematical reasoning tasks such as AIME and MATH-500. Its primary advantages include an efficient training framework, strong multimodal reasoning capabilities, and support for long contexts. Kimi k1.5 is mainly aimed at application scenarios requiring complex reasoning and logical analysis, such as programming assistance, mathematical problem-solving, and code generation.
Model Training and Deployment
253.9K
Chinese Picks

Deepseek R1 Zero
DeepSeek-R1-Zero is a reasoning model developed by the DeepSeek team, focused on enhancing reasoning capabilities through pure reinforcement learning. Without any supervised fine-tuning, the model exhibits powerful reasoning behaviors such as self-verification, reflection, and generating long chains of reasoning. Its main advantages are efficient reasoning, usability without a supervised fine-tuning stage, and outstanding performance in mathematical, coding, and reasoning tasks. Built on the DeepSeek-V3 architecture, it is suitable for large-scale reasoning tasks in both research and commercial applications.
AI Model
86.7K
Chinese Picks

Deepseek R1
DeepSeek-R1, launched by the DeepSeek team, is a first-generation reasoning model that achieves exceptional reasoning capability through large-scale reinforcement learning training, without supervised fine-tuning as a preliminary step. It excels at mathematical, coding, and reasoning tasks, performing comparably to the OpenAI o1 model. DeepSeek-R1 also comes with a range of distilled models for different scale and performance requirements. Its open-source release provides robust tools for the research community and supports commercial use and further development.
AI Model
451.8K
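The reinforcement learning stage behind DeepSeek-R1 uses GRPO, which samples a group of completions per prompt and normalizes each completion's reward within its own group instead of training a separate value network. A minimal sketch of that advantage computation (illustrative, not the team's actual code):

```python
from statistics import mean, stdev

def group_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantages: each sampled completion's reward is
    normalized against the mean and standard deviation of its own
    sampling group, removing the need for a learned critic."""
    mu, sigma = mean(rewards), stdev(rewards)
    return [(r - mu) / (sigma + 1e-8) for r in rewards]
```

For example, with group rewards [1.0, 0.0, 0.0, 1.0], the two correct completions get positive advantages and the two incorrect ones negative, summing to zero across the group.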